🛠️ All DevTools

Showing 101–120 of 4274 tools

Last Updated
April 22, 2026 at 08:00 PM

[Other] Show HN: Xit – a Git-compatible VCS written in Zig

The marquee feature is patch-based merging, similar to Darcs and Pijul. I think xit is the first version control system (VCS) to have this feature while still being Git-compatible. See the 100% human-written readme for more.

Found: April 15, 2026 ID: 4172

[Other] Want to Write a Compiler? Just Read These Two Papers (2008)

Found: April 15, 2026 ID: 4169

[Other] Direct Win32 API, Weird-Shaped Windows, and Why They Mostly Disappeared

Found: April 15, 2026 ID: 4167

[CLI Tool] Wacli – WhatsApp CLI: sync, search, send

Found: April 15, 2026 ID: 4168

[Other] CadQuery is an open-source Python library for building 3D CAD models

Found: April 14, 2026 ID: 4204

[Other] Your codebase doesn't care how it got written

Found: April 14, 2026 ID: 4164

[Other] Show HN: Plain – The full-stack Python framework designed for humans and agents

Found: April 14, 2026 ID: 4154

[Other] Turn your best AI prompts into one-click tools in Chrome

Found: April 14, 2026 ID: 4152

[Other] Claude Code Routines (Hacker News score: 360)

Found: April 14, 2026 ID: 4157

[Database] 5NF and Database Design (Hacker News score: 157)

Found: April 14, 2026 ID: 4163

[Monitoring/Observability] Show HN: Kelet – Root Cause Analysis agent for your LLM apps

I've spent the past few years building 50+ AI agents in prod (some reached 1M+ sessions/day), and the hardest part was never building them — it was figuring out why they fail.

AI agents don't crash. They just quietly give wrong answers. You end up scrolling through traces one by one, trying to find a pattern across hundreds of sessions.

Kelet automates that investigation. Here's how it works:

1. You connect your traces and signals (user feedback, edits, clicks, sentiment, LLM-as-a-judge, etc.)
2. Kelet processes those signals and extracts facts about each session
3. It forms hypotheses about what went wrong in each case
4. It clusters similar hypotheses across sessions and investigates them together
5. It surfaces a root cause with a suggested fix you can review and apply

The key insight: individual session failures look random. But when you cluster the hypotheses, failure patterns emerge.

The fastest way to integrate is through the Kelet Skill for coding agents — it scans your codebase, discovers where signals should be collected, and sets everything up for you. There are also Python and TypeScript SDKs if you prefer manual setup.

It's currently free during beta. No credit card required. Docs: https://kelet.ai/docs/

I'd love feedback on the approach, especially from anyone running agents in prod. Does automating the manual error analysis sound right?
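The clustering step (4) is the interesting part: individually random-looking hypotheses become patterns once grouped. Here is a minimal, generic sketch of that idea in Python, using token overlap instead of embeddings; nothing below is Kelet's actual code, and `cluster_hypotheses` and its threshold are illustrative.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Token-overlap similarity between two hypothesis strings."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_hypotheses(hypotheses: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Greedy single-pass clustering: each hypothesis joins the first
    existing cluster whose seed it resembles, else starts a new one."""
    clusters: list[list[str]] = []
    seeds: list[set[str]] = []
    for h in hypotheses:
        tokens = set(h.lower().split())
        for i, seed in enumerate(seeds):
            if jaccard(tokens, seed) >= threshold:
                clusters[i].append(h)
                break
        else:
            clusters.append([h])
            seeds.append(tokens)
    return clusters

hyps = [
    "retrieval returned stale document",
    "retrieval returned stale document for pricing query",
    "model ignored system prompt",
]
groups = cluster_hypotheses(hyps)  # two clusters: a retrieval pattern, and an outlier
```

In practice an embedding-based similarity would replace the token overlap, but the grouping logic is the same.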

Found: April 14, 2026 ID: 4159

[Database] Show HN: A memory database that forgets, consolidates, and detects contradiction

Vector databases store memories. They don't manage them. After 10k memories, recall quality degrades because there's no consolidation, no forgetting, no conflict resolution. Your AI agent just gets noisier.

YantrikDB is a cognitive memory engine — embed it, run it as a server, or connect via MCP. It thinks about what it stores: consolidation collapses duplicate memories, contradiction detection flags incompatible facts, and temporal decay with a configurable half-life lets unimportant memories fade like human memory does.

Single Rust binary. HTTP + binary wire protocol. 2-voter + 1-witness HA cluster via Docker Compose or Kubernetes. Chaos-tested failover, runtime deadlock detection (parking_lot), per-tenant quotas, Prometheus metrics. Ran a 42-task hardening sprint last week — 1178 core tests, cargo-fuzz targets, CRDT property tests, 5 ops runbooks.

Live on a 3-node Proxmox homelab cluster with multiple tenants. Alpha — primary user is me, looking for the second one.
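Temporal decay with a configurable half-life is a standard exponential-decay scheme. A small sketch of the general idea, not YantrikDB's implementation; the function name and scoring are assumptions:

```python
import math

def decayed_score(base_score: float, age_seconds: float, half_life_seconds: float) -> float:
    """Exponential decay: a memory's effective score halves every half-life."""
    return base_score * math.exp(-math.log(2) * age_seconds / half_life_seconds)

# With a one-day half-life, a memory scored 1.0 is worth
# 0.5 after one day and 0.25 after two.
one = decayed_score(1.0, 86_400, 86_400)
two = decayed_score(1.0, 2 * 86_400, 86_400)
```

Ranking recall results by decayed score (rather than deleting rows) is the usual way such "forgetting" is made reversible.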

Found: April 14, 2026 ID: 4155

[Other] Show HN: MōBrowser, a TypeScript-first desktop app framework with typed IPC

Hi HN,

For the last ~15 years I've worked on embedding web browsers into Java and .NET desktop apps (JxBrowser, DotNetBrowser). Over time, I watched many teams move from embedding web views into native apps to building full desktop apps with frameworks like Electron and Tauri.

Both are useful, but in practice I kept running into several problems.

With Electron, beyond the larger app footprint, I often ran into:

- lack of type-safe IPC
- no source code protection
- weak support for the modern web stack

Tauri solves some problems (like app size), but introduces others:

- different WebViews across platforms → inconsistent behavior
- requires Rust + JS instead of a single stack

So we built MōBrowser, a framework for building desktop apps with TypeScript, Node.js, and Chromium.

Some of the things we focused on:

- typed IPC using Protobuf + code generation (RPC-style communication instead of string channels)
- consistent rendering and behavior across different platforms
- Node.js runtime
- built-in packaging, updates, and scaffolding
- source code protection
- small delta auto-updates

The goal is to let web developers ship desktop apps with a web stack they already know and fewer cross-platform surprises.

I'd especially love feedback from people who have built production apps with Electron or Tauri.

Happy to answer any questions.

Found: April 14, 2026 ID: 4161

[Other] Show HN: LangAlpha – what if Claude Code was built for Wall Street?

Some technical context on what we ran into building this.

MCP tools don't really work for financial data at scale. One tool call for five years of daily prices dumps tens of thousands of tokens into the context window. And data vendors pack dozens of tools into a single MCP server; schemas alone can eat 50k+ tokens before the agent does anything useful. So we auto-generate typed Python modules from the MCP schemas at workspace init and upload them into the sandbox. The agent just imports them like a normal library. Only a one-line summary per server stays in the prompt. We have around 80 tools across our servers, and the prompt cost is the same whether a server has 3 tools or 30. This part isn't finance-specific; it works with any MCP server.

The other big thing was making research actually persist across sessions. Most agents treat a single deliverable (a PDF, a spreadsheet) as the end goal. In investing, that's day one. You update the model when earnings drop, re-run comps when a competitor reports, and keep layering new analysis on old. But try doing that across agent sessions: files don't carry over, and you re-paste context every time. So we built everything around workspaces. Each one maps to a persistent sandbox, one per research goal. The agent maintains its own memory file with findings and a file index that gets re-read before every LLM call. Come back a week later, start a new thread, and it picks up where it left off.

We also wanted the agent to have real domain context the way Claude Code has codebase context. Portfolio, watchlist, risk tolerance, financial data sources, all injected into every call. Existing AI investing platforms have some of that, but nothing close to what a proper agent harness can do. We wanted both and couldn't find it, so we built it and open-sourced the whole thing.
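The schema-to-module trick can be sketched generically: render each MCP tool schema as a typed Python stub so the agent imports a library instead of carrying full tool schemas in its prompt. Everything below (the schema shape, `module_from_schema`, the `_call` helper it emits) is a hypothetical illustration, not LangAlpha's code.

```python
def module_from_schema(server: str, tools: list[dict]) -> str:
    """Render a minimal Python module: one typed function stub per MCP tool,
    so only a one-line summary per server needs to live in the agent's prompt."""
    lines = [f'"""Auto-generated client for the {server} MCP server."""']
    for tool in tools:
        params = ", ".join(f"{p}: {t}" for p, t in tool["params"].items())
        lines.append(f"def {tool['name']}({params}) -> dict:")
        lines.append(f'    """{tool["doc"]}"""')
        lines.append(f"    return _call({tool['name']!r}, locals())")
    return "\n".join(lines)

src = module_from_schema(
    "prices",
    [{"name": "daily_prices",
      "params": {"ticker": "str", "years": "int"},
      "doc": "Fetch daily closing prices."}],
)
```

The generated source is written into the sandbox at workspace init; the prompt only ever sees the one-line module summary, regardless of how many tools the server exposes.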

Found: April 14, 2026 ID: 4153

[Other] Show HN: Remoroo – trying to fix memory in long-running coding agents

I built Remoroo because most coding agents fall apart once the work stops being a short edit-and-run loop.

A real engineering experiment can run for hours. Along the way, the agent reads files, runs commands, checks logs, compares metrics, tries ideas that fail, and needs to remember what already happened. Once context starts slipping, it forgets the goal, loses track of the baseline, and retries bad ideas.

Remoroo is my attempt to solve that problem.

You point it at a repo and give it a measurable goal. It runs locally, tries changes, executes experiments, measures the result, keeps what helps, and throws away what does not.

A big part of the system is memory. Long runs generate far more context than a model can hold, so I built a demand-paging memory system inspired by OS virtual memory to keep the run coherent over time.

There is a technical writeup here: https://www.remoroo.com/blog/how-remoroo-works

Would love feedback from people working on long-running agents, training loops, eval harnesses, or similar workflows.
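The virtual-memory analogy can be sketched with a toy page table: keep only the most recently used context pages resident and page the rest back in from a backing store on demand. This is a generic LRU illustration, not Remoroo's system; the class and keys are invented.

```python
from collections import OrderedDict

class PagedMemory:
    """Toy demand-paged context store: only the most recently used pages
    stay resident; everything else lives in a backing store and is
    paged back in on access (the OS virtual-memory analogy)."""
    def __init__(self, resident_limit: int):
        self.resident: OrderedDict[str, str] = OrderedDict()
        self.backing: dict[str, str] = {}
        self.limit = resident_limit

    def write(self, key: str, page: str) -> None:
        self.backing[key] = page
        self._touch(key)

    def read(self, key: str) -> str:
        self._touch(key)  # page fault: load from backing store if evicted
        return self.resident[key]

    def _touch(self, key: str) -> None:
        self.resident[key] = self.backing[key]
        self.resident.move_to_end(key)
        while len(self.resident) > self.limit:
            self.resident.popitem(last=False)  # evict least recently used

mem = PagedMemory(resident_limit=2)
mem.write("goal", "reduce p95 latency below 200ms")
mem.write("baseline", "p95 = 340ms")
mem.write("attempt-1", "batching: failed, p95 = 360ms")
# "goal" was evicted from residency but is still recoverable on demand:
recovered = mem.read("goal")
```

The real problem is of course which pages to evict and how to summarize them; the sketch only shows the resident/backing split that keeps a long run within the model's context budget.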

Found: April 14, 2026 ID: 4213

[CLI Tool] Show HN: Kontext CLI – Credential broker for AI coding agents in Go

We built the Kontext CLI because AI coding agents need access to GitHub, Stripe, databases, and dozens of other services — and right now most teams handle this by copy-pasting long-lived API keys into .env files, or into the chat interface itself, while hoping for the best.

The problem isn't just secret sprawl. It's that there's no lineage of access. You don't know which developer launched which agent, what it accessed, or whether it should have been allowed to. The moment you hand raw credentials to a process, you've lost the ability to enforce policy, audit access, or rotate without pain. The credential is the authorization, and that's fundamentally broken when autonomous agents are making hundreds of API calls per session.

Kontext takes a different approach. You declare what credentials a project needs in a .env.kontext file:

```
GITHUB_TOKEN={{kontext:github}}
STRIPE_KEY={{kontext:stripe}}
LINEAR_TOKEN={{kontext:linear}}
```

Then run `kontext start --agent claude`. The CLI authenticates you via OIDC, and for each placeholder: if the service supports OAuth, it exchanges the placeholder for a short-lived access token via RFC 8693 token exchange; for static API keys, the backend injects the credential directly into the agent's runtime environment. Either way, secrets exist only in memory during the session — never written to disk on your machine. Every tool call is streamed for audit as the agent runs.

The closest analogy is a Security Token Service (STS): you authenticate once, and the backend mints short-lived, scoped credentials on the fly — except unlike a classical STS, we hold the upstream secrets, so nothing long-lived ever reaches the agent. The backend holds your OAuth refresh tokens and API keys; the CLI never sees them. It gets back short-lived access tokens scoped to the session.

What the CLI captures for every tool call: what the agent tried to do, what happened, whether it was allowed, and who did it — attributed to a user, session, and org.

Install with one command: `brew install kontext-dev/tap/kontext`

The CLI is written in Go (~5ms hook overhead per tool call), uses ConnectRPC for backend communication, and stores auth in the system keyring. Works with Claude Code today; Codex support is coming soon.

We're working on server-side policy enforcement next — the infrastructure for allow/deny decisions on every tool call is already wired; we just need to close the loop so tool calls can also be rejected.

We'd love feedback on the approach. Especially curious: how are teams handling credential management for AI agents today? Are you just pasting env vars into the agent chat, or have you found something better?

GitHub: https://github.com/kontext-dev/kontext-cli
Site: https://kontext.security
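The placeholder syntax is simple enough to parse with a regex. A hedged sketch of just the mapping step: the `{{kontext:provider}}` syntax comes from the post, but the parsing code is an illustration, not Kontext's implementation.

```python
import re

# One placeholder per line: VAR_NAME={{kontext:provider}}
PLACEHOLDER = re.compile(r"^(?P<var>\w+)=\{\{kontext:(?P<provider>[\w-]+)\}\}$")

def parse_env_kontext(text: str) -> dict[str, str]:
    """Map each env-var name to the credential provider its placeholder names."""
    out: dict[str, str] = {}
    for line in text.splitlines():
        m = PLACEHOLDER.match(line.strip())
        if m:
            out[m.group("var")] = m.group("provider")
    return out

mapping = parse_env_kontext("GITHUB_TOKEN={{kontext:github}}\nSTRIPE_KEY={{kontext:stripe}}\n")
```

Each (var, provider) pair would then drive either an RFC 8693 token exchange or a backend-side key injection, per the description above.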

Found: April 14, 2026 ID: 4150

[CLI Tool] jj – the CLI for Jujutsu (Hacker News score: 337)

Found: April 14, 2026 ID: 4149

[CLI Tool] Show HN: A CLI that writes its own integration code

We run superglue, an OSS agentic integration platform. Last week I talked to a founder of another YC startup. She found a use case for our CLI that we hadn't officially launched yet.

Her problem: customers wanted to create Opps in Salesforce from inside the chat in her app. We kept seeing this pattern: teams build agents, and their users can perfectly describe what they want: "pull these three objects from Salesforce and push to nCino when X condition is true", but translating that into a generalized, hard-coded tool the agent can call is a lot of work and does not scale, since the logic is different for every user.

What the superglue CLI does: you point it at any API, and your agent gets the ability to reason over that API at runtime. No pre-built tools. The agent reads the spec, plans the calls, executes them.

The founder using this in production described it like this: she gave the CLI to her agent with an instruction set and told it not to build tools, just run against the API. It handled multi-step Salesforce object creation correctly, including per-user field logic and record type templates.

Concretely: instead of writing a createSalesforceOpp tool that handles contact -> account -> Opp creation with all the conditional logic, you write a skill doc and let the agent figure out which endpoints to hit and in what order.

The tradeoff: you're giving the agent more autonomy over what API calls it makes. That requires good instructions and some guardrails. But for long-tail, user-specific connectors, it's a lot more practical than building a tool for every case.

Happy to discuss. Curious if others have run into the "pre-defined tool" ceiling with MCP-based connectors and how you've worked around it.

Docs: https://docs.superglue.cloud/getting-started/cli-skills
Repo: https://github.com/superglue-ai/superglue

Found: April 14, 2026 ID: 4146

[Other] Show HN: A stateful UI runtime for reactive web apps in Go

Doors: a server-driven UI framework + runtime for building stateful, reactive web applications in Go.

Some highlights:

* Front-end framework capabilities in server-side Go. Reactive state primitives, dynamic routing, composable components.
* No public API layer. No endpoint design needed; a private temporal transport is handled under the hood.
* Unified control flow. No context switch between back-end and front-end.
* Integrated web stack. Bundle assets, build scripts, serve private files, automate CSP, and ship in one binary.

How it works: the Go server is the UI runtime. The web application runs on a stateful server, while the browser acts as a remote renderer and input layer.

Security model: every user can interact only with what you render to them. That means you check permissions when you render the button, and that is enough to be sure the related action won't be triggered by anyone else.

Mental model: link the DOM to the data it depends on.

Limitations:

* Does not make sense for static non-interactive sites or client-first apps with simple routing, and is not suitable for offline PWAs.
* Load balancing and roll-outs without user interruption require different strategies with a stateful server (mechanics to make this simpler are included).

Where it fits best: apps with heavy user flows and complex business logic. A single execution context and no API/endpoint permission-management burden make it easier.

Peculiarities:

* Purpose-built [Go language extension](https://github.com/doors-dev/gox) with its own LSP, parser, and editor plugins. Adds HTML as Go expressions and `elem` primitives.
* Custom concurrency engine that enables non-blocking event processing, parallel rendering, and tree-aware state propagation.
* HTTP/3-ready synchronization protocol (rolling request + streaming, events via regular POST, no WebSockets/SSE).

From the author (me): it took me 1 year and 9 months to get to this stage. I rewrote the framework 6 or 7 times until every part was coherent and every decision felt right or was a reasonable compromise. I am very critical of my own work and I see flaws, but overall it turned out solid; I like the developer experience as a user. The mental model requires a bit of thinking upfront, but pays off with explicit code and a predictable outcome.

Code example:

```go
type Search struct {
    input doors.Source[string] // reactive state
}

elem (s Search) Main() {
    <input (doors.AInput{
        On: func(ctx context.Context, r doors.RequestInput) bool {
            s.input.Update(ctx, r.Event().Value) // update reactive state
            return false
        },
    }) type="text" placeholder="search">
    ~// subscribe results to state changes
    ~(doors.Sub(s.input, s.results))
}

elem (s Search) results(input string) {
    ~(for _, user := range Users.Search(input) {
        <card> ~(user.Name) </card>
    })
}
```

Found: April 14, 2026 ID: 4145

[Other] Show HN: CodeBurn – Analyze Claude Code token usage by task

Built this after realizing I was spending ~$1400/week on Claude Code with almost no visibility into what was actually consuming tokens.

Tools like ccusage give a cost breakdown per model and per day, but I wanted to understand usage at the task level.

CodeBurn reads the JSONL session transcripts that Claude Code stores locally (~/.claude/projects/) and classifies each turn into 13 categories based on tool usage patterns (no LLM calls involved).

One surprising result: about 56% of my spend was on conversation turns with no tool usage. Actual coding (edits/writes) was only ~21%.

The interface is an interactive terminal UI built with Ink (React for terminals), with gradient bar charts, responsive panels, and keyboard navigation. There's also a SwiftBar menu bar integration for macOS.

Happy to hear feedback or ideas.
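The turn classification described above is rule-based, so it is easy to sketch. A minimal version with three categories in place of thirteen; the JSONL field names and the rule set here are assumptions for illustration, not CodeBurn's actual schema.

```python
import json

def classify_turn(turn: dict) -> str:
    """Bucket a session turn by its tool usage, in the spirit of the
    rule-based (no-LLM) classification the post describes."""
    tools = {t["name"] for t in turn.get("tool_calls", [])}
    if not tools:
        return "conversation"        # spend with no tool usage at all
    if tools & {"Edit", "Write"}:
        return "coding"              # actual edits/writes
    if tools & {"Read", "Grep", "Glob"}:
        return "exploration"
    return "other"

lines = [
    '{"tool_calls": []}',
    '{"tool_calls": [{"name": "Edit"}]}',
    '{"tool_calls": [{"name": "Grep"}]}',
]
labels = [classify_turn(json.loads(line)) for line in lines]
```

Summing per-turn token counts by label is then enough to reproduce the kind of breakdown quoted above (e.g. conversation vs. coding spend).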

Found: April 13, 2026 ID: 4191
Page 6 of 214